Incremental Active Learning for Optimal Generalization
Authors
Abstract
The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method applicable to arbitrary models. The effectiveness of this method is shown through computer simulations. The other is the optimal sampling method in trigonometric polynomial models. This method precisely specifies the optimal sampling locations.
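The trigonometric polynomial model class mentioned in the abstract can be made concrete with a small sketch (an illustration of the model class only, not the paper's sampling method; all function names are hypothetical). For an order-N trigonometric polynomial there are 2N+1 basis functions, and 2N+1 equidistant samples on [0, 2π) recover the coefficients of a noiseless target exactly by least squares:

```python
import numpy as np

def trig_design_matrix(x, order):
    """Design matrix [1, cos x, sin x, ..., cos(Nx), sin(Nx)]
    for a trigonometric polynomial of the given order."""
    cols = [np.ones_like(x)]
    for n in range(1, order + 1):
        cols.append(np.cos(n * x))
        cols.append(np.sin(n * x))
    return np.column_stack(cols)

def fit_trig_poly(x, y, order):
    """Least-squares fit of a trigonometric polynomial."""
    A = trig_design_matrix(x, order)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Equidistant sampling on [0, 2*pi), one point per basis function.
order = 2
M = 2 * order + 1
x = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
y = 1.0 + 0.5 * np.cos(x) - 0.3 * np.sin(2 * x)   # noiseless target
coef = fit_trig_poly(x, y, order)
# coef recovers [1.0, 0.5, 0.0, 0.0, -0.3] for the basis ordering above
```

With noisy observations, where the samples are placed additionally affects the variance of the estimated coefficients, which is the question the abstract's optimal sampling method addresses.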
Similar papers
Concurrent Evolution of Neural Networks and Their Data Sets
The ultimate goal of designing and training a neural network is optimizing the ability to minimize the expectation of the generalization error. Because active learning techniques can be used to find optimal complexity of network, active learning has emerged as an efficient alternative to improve the generalization performance of neural networks. In this paper, we propose an evolutionary approach th...
Incremental Active Learning with Bias Reduction
The problem of designing input signals for optimal generalization in supervised learning is called active learning. In many active learning methods devised so far, the bias of the learning results is assumed to be zero. In this paper, we remove this assumption and propose a new active learning method with the bias reduction. The effectiveness of the proposed method is demonstrated through compu...
Incremental Active Learning in Consideration of Bias
The problem of designing input signals for optimal generalization in supervised learning is called active learning. In many active learning methods devised so far, the sampling location minimizing the variance of the learning results is selected. This implies that the bias of the learning results is assumed to be zero or small enough to be neglected. In this paper, we propose an active learning...
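A variance-only selection rule of the kind described above can be sketched as follows (a generic illustration for a linear least-squares model, not this paper's method; the function name, candidate pool, and regularizer are hypothetical). Each candidate is scored by the total predictive variance that would remain after adding it, and the minimizer is chosen; the bias term is ignored, which is exactly the assumption the paper removes:

```python
import numpy as np

def next_sample(Phi_train, Phi_pool, reg=1e-6):
    """Pick the pool point whose inclusion most reduces the total
    predictive variance of a linear least-squares model."""
    d = Phi_train.shape[1]
    A = Phi_train.T @ Phi_train + reg * np.eye(d)
    best_idx, best_var = None, np.inf
    for i, phi in enumerate(Phi_pool):
        A_new = A + np.outer(phi, phi)      # rank-one update for candidate i
        cov = np.linalg.inv(A_new)          # parameter covariance (up to noise level)
        # total predictive variance over the pool: sum_j phi_j^T cov phi_j
        var = np.einsum('ij,jk,ik->', Phi_pool, cov, Phi_pool)
        if var < best_var:
            best_idx, best_var = i, var
    return best_idx

# Features [x, 1]; training data clustered near x = 0, so the
# far-out candidate at x = 5 reduces the pooled variance the most.
Phi_train = np.array([[0.0, 1.0], [0.1, 1.0]])
Phi_pool = np.array([[0.2, 1.0], [5.0, 1.0], [0.3, 1.0]])
idx = next_sample(Phi_train, Phi_pool)
```

The sketch recomputes the inverse for each candidate for clarity; the Sherman-Morrison identity would make the rank-one update cheaper.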
Early Stopping Heuristics in Pool-Based Incremental Active Learning for Least-Squares Probabilistic Classifier
The objective of pool-based incremental active learning is to choose a sample to label from a pool of unlabeled samples in an incremental manner so that the generalization error is minimized. In this scenario, the generalization error often hits a minimum in the middle of the incremental active learning procedure and then it starts to increase. In this paper, we address the problem of early lab...
An Incremental Learning Algorithm That Optimizes Network Size and Sample Size in One Trial
A constructive learning algorithm is described that builds a feedforward neural network with an optimal number of hidden units to balance convergence and generalization. The method starts with a small training set and a small network, and expands the training set incrementally after training. If the training does not converge, the network grows incrementally to increase its learning capacity....
Journal: Neural Computation
Volume 12, Issue 12
Pages: -
Publication year: 2000